Markov processes follow from the principle of Maximum Caliber
Markov models are widely used to describe stochastic dynamics.
Here, we show that Markov models are a natural consequence of the dynamical
principle of Maximum Caliber. First, we show that when there are different
possible dynamical trajectories in a time-homogeneous process, then the only
type of process that maximizes the path entropy, for any given singlet
statistics, is a sequence of independent and identically distributed (i.i.d.)
random variables, which is the simplest Markov process. If the data are given
as sequential pairwise statistics, then maximizing the caliber dictates that
the process is Markovian with a uniform initial distribution. Furthermore,
if an initial non-uniform dynamical distribution is known, or multiple
trajectories are conditioned on an initial state, then the Markov process is
still the only one that maximizes the caliber. Second, given a model, MaxCal
can be used to compute the parameters of that model. We show that this
procedure is equivalent to the maximum-likelihood method of inference in the
theory of statistics.
Comment: 4 pages
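The claimed equivalence between MaxCal parameter inference and maximum likelihood can be made concrete for pairwise statistics: the maximum-likelihood estimate of a time-homogeneous Markov transition matrix is simply the row-normalized matrix of observed pair counts. A minimal sketch (illustrative only; the toy chain and state labels are invented, not the paper's data):

```python
import numpy as np

def mle_transition_matrix(chain, n_states):
    """Maximum-likelihood Markov transition matrix from an observed chain.

    Counts pairwise (i -> j) transitions and row-normalizes, which is the
    MLE for a time-homogeneous Markov chain.
    """
    counts = np.zeros((n_states, n_states))
    for i, j in zip(chain[:-1], chain[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for states that are never visited.
    row_sums[row_sums == 0] = 1.0
    return counts / row_sums

# Example: a short two-state chain.
chain = [0, 0, 1, 0, 1, 1, 1, 0, 0, 1]
P = mle_transition_matrix(chain, 2)
```

For the toy chain above, state 0 starts five of the nine pairs and is followed by state 1 in three of them, so P[0, 1] = 0.6.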
A New Approach to Time Domain Classification of Broadband Noise in Gravitational Wave Data
Broadband noise in gravitational wave (GW) detectors, also known as triggers,
can often be a deterrent to the efficiency with which astrophysical search
pipelines detect sources. It is important to understand their instrumental or
environmental origin so that they can be eliminated or accounted for in the
data. Since the number of triggers is large, data mining approaches such as
clustering and classification are useful tools for this task. Classification of
triggers based on a handful of discrete properties has been done in the past. A
rich information content is available in the waveform or 'shape' of the
triggers, which has so far received only limited exploration. This paper
presents a new way to classify triggers deriving information from both trigger
waveforms as well as their discrete physical properties using a sequential
combination of the Longest Common Sub-Sequence (LCSS) and LCSS coupled with
Fast Time Series Evaluation (FTSE) for waveform classification and the
multidimensional hierarchical classification (MHC) analysis for the grouping
based on physical properties. A generalized k-means algorithm is used with the
LCSS (and LCSS+FTSE) for clustering the triggers using a validity measure to
determine the correct number of clusters in absence of any prior knowledge. The
results have been demonstrated by simulations and by application to a segment
of real LIGO data from the sixth science run.
Comment: 16 pages, 16 figures
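The LCSS measure at the heart of the waveform classification step can be sketched with the standard dynamic program. This is a toy illustration, not the pipeline's FTSE-accelerated implementation; the matching tolerance `eps` is an assumed parameter:

```python
def lcss_length(a, b, eps):
    """Longest Common Sub-Sequence length for real-valued series.

    Two samples 'match' when they differ by at most eps; this is the
    standard O(len(a) * len(b)) dynamic program.
    """
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcss_distance(a, b, eps):
    """Distance in [0, 1]: 0 for series identical within eps."""
    return 1.0 - lcss_length(a, b, eps) / min(len(a), len(b))
```

A distance of this form can be plugged directly into a generalized k-means clustering step, as the paper's pipeline does with its own LCSS variant.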
Statistical Consequences of Devroye Inequality for Processes. Applications to a Class of Non-Uniformly Hyperbolic Dynamical Systems
In this paper, we apply Devroye inequality to study various statistical
estimators and fluctuations of observables for processes. Most of these
observables are suggested by dynamical systems. These applications concern the
covariance function, the integrated periodogram, the correlation dimension,
the kernel density estimator, the speed of convergence of empirical measure,
the shadowing property and the almost-sure central limit theorem. We proved in
\cite{CCS} that Devroye inequality holds for a class of non-uniformly
hyperbolic dynamical systems introduced in \cite{young}. In the second appendix
we prove that, if the decay of correlations holds with a common rate for all
pairs of functions, then it holds uniformly in the function spaces. In the last
appendix we prove that for the subclass of one-dimensional systems studied in
\cite{young} the density of the absolutely continuous invariant measure belongs
to a Besov space.
Comment: 33 pages; companion of the paper math.DS/0412166; corrected version; to appear in Nonlinearity
On the entropy production of time series with unidirectional linearity
There are non-Gaussian time series that admit a causal linear autoregressive
moving average (ARMA) model when regressing the future on the past, but not
when regressing the past on the future. The reason is that, in the latter case,
the regression residuals are only uncorrelated but not statistically
independent of the future. In previous work, we have experimentally verified
that many empirical time series indeed show such a time inversion asymmetry.
For various physical systems, it is known that time-inversion asymmetries are
linked to the thermodynamic entropy production in non-equilibrium states. Here
we show that such a link also exists for the above unidirectional linearity.
We study the dynamical evolution of a physical toy system with linear
coupling to an infinite environment and show that the linearity of the dynamics
is inherited by the forward-time conditional probabilities, but not by the
backward-time conditionals. The reason for this asymmetry between past and
future is that the environment permanently provides particles that are in a
product state before they interact with the system, but show statistical
dependencies afterwards. From a coarse-grained perspective, the interaction
thus generates entropy. We quantitatively relate the strength of the
non-linearity of the backward conditionals to the minimal amount of entropy
generation.
Comment: 16 pages
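The time-inversion asymmetry described above can be reproduced numerically: fit an AR(1) model forwards and backwards; by construction the least-squares residuals are uncorrelated with the regressor in both directions, but for non-Gaussian innovations only the forward residuals are statistically independent of it. A hedged sketch (the AR coefficient, the skewed innovation law, and the dependence score are illustrative choices, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 200_000, 0.5

# AR(1) with skewed (centered exponential) innovations.
e = rng.exponential(1.0, size=n) - 1.0
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + e[t]

def dependence_score(resid, regressor):
    """Correlation of the residual with the *squared* regressor: zero when
    the residual is independent of the regressor, generally nonzero when it
    is merely uncorrelated with it."""
    return np.corrcoef(resid, regressor**2)[0, 1]

# Forward regression: x[t] on x[t-1].
past, future = x[:-1], x[1:]
a_fwd = np.dot(past, future) / np.dot(past, past)
score_fwd = dependence_score(future - a_fwd * past, past)

# Backward regression: x[t-1] on x[t].
a_bwd = np.dot(future, past) / np.dot(future, future)
score_bwd = dependence_score(past - a_bwd * future, future)
```

Both fitted slopes estimate the same lag-1 autocorrelation, yet the backward dependence score is clearly bounded away from zero while the forward one vanishes: the forward model is the causal one.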
Precursors of extreme increments
We investigate precursors and predictability of extreme increments in a time
series. The events we focus on consist of large increments within
successive time steps. We are especially interested in understanding how the
quality of the predictions depends on the strategy to choose precursors, on the
size of the event and on the correlation strength. We study the prediction of
extreme increments analytically in an AR(1) process, and numerically in wind
speed recordings and long-range correlated ARMA data. We evaluate the success
of predictions via receiver operating characteristic (ROC) curves. Furthermore,
we observe an increase of the quality of predictions with increasing event size
and with decreasing correlation in all examples. Both effects can be understood
by using the likelihood ratio as a summary index for smooth ROC-curves.
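The prediction setup described above can be sketched numerically: in an AR(1) process, large upward increments are more likely when the current value is low, so the current value itself serves as a precursor, and sweeping an alarm threshold traces out a ROC curve. All parameters below (AR coefficient, event threshold, precursor rule) are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n, a, d = 100_000, 0.75, 2.0

# AR(1) series x[t+1] = a * x[t] + noise.
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = a * x[t] + noise[t + 1]

increments = x[1:] - x[:-1]
event = increments > d          # "extreme increment" events
score = -x[:-1]                 # precursor: alarm when current value is low

# AUC = probability a random event outranks a random non-event,
# computed via the rank-sum (Mann-Whitney) identity.
order = np.argsort(score)
ranks = np.empty(n - 1)
ranks[order] = np.arange(1, n)
n_ev = event.sum()
n_ne = (n - 1) - n_ev
auc = (ranks[event].sum() - n_ev * (n_ev + 1) / 2) / (n_ev * n_ne)
```

An AUC well above 0.5 confirms that the precursor carries predictive information; repeating the experiment with a larger `d` or a smaller `a` reproduces the trends reported in the abstract.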
Unfolding dynamics of proteins under applied force
Understanding the mechanisms of protein folding is a major challenge that is being addressed effectively by collaboration between researchers in the physical and life sciences. Recently, it has become possible to mechanically unfold proteins by pulling on their two termini using local force probes such as the atomic force microscope. Here, we present data from experiments in which synthetic protein polymers designed to mimic naturally occurring polyproteins have been mechanically unfolded. For many years protein folding dynamics have been studied using chemical denaturation, and we therefore first discuss our mechanical unfolding data in the context of such experiments and show that the two unfolding mechanisms are not the same, at least for the proteins studied here. We also report unexpected observations that indicate a history effect in the observed unfolding forces of polymeric proteins and explain this in terms of the changing number of domains remaining to unfold and the increasing compliance of the lengthening unstructured polypeptide chain produced each time a domain unfolds.
Random walks - a sequential approach
In this paper sequential monitoring schemes to detect nonparametric drifts
are studied for the random walk case. The procedure is based on a kernel
smoother. As a by-product we obtain the asymptotics of the Nadaraya-Watson
estimator and its associated sequential partial sum process under
non-standard sampling. The asymptotic behavior differs substantially from the
stationary situation, if there is a unit root (random walk component). To
obtain meaningful asymptotic results we consider local nonparametric
alternatives for the drift component. It turns out that the rate of convergence
at which the drift vanishes determines whether the asymptotic properties of the
monitoring procedure are determined by a deterministic or random function.
Further, we provide a theoretical result about the optimal kernel for a given
alternative.
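The Nadaraya-Watson estimator at the core of the monitoring scheme is a kernel-weighted local average. A minimal sketch (the Gaussian kernel, bandwidth, and toy regression function are assumptions for illustration, not the paper's setting):

```python
import numpy as np

def nadaraya_watson(x_obs, y_obs, x_eval, h):
    """Nadaraya-Watson kernel regression estimate of E[Y | X = x]:

        m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h),

    here with a Gaussian kernel K and bandwidth h.
    """
    u = (x_eval[:, None] - x_obs[None, :]) / h
    w = np.exp(-0.5 * u**2)
    return (w * y_obs[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
x_obs = rng.uniform(-2, 2, size=5000)
y_obs = np.sin(x_obs) + 0.1 * rng.normal(size=5000)
x_eval = np.linspace(-1.5, 1.5, 7)
m_hat = nadaraya_watson(x_obs, y_obs, x_eval, h=0.15)
```

Under i.i.d. sampling the estimate recovers the drift function; the paper's point is that its asymptotics change substantially when the regressor has a random walk component.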
Emission-aware Energy Storage Scheduling for a Greener Grid
Reducing our reliance on carbon-intensive energy sources is vital for
reducing the carbon footprint of the electric grid. Although the grid is seeing
increasing deployments of clean, renewable sources of energy, a significant
portion of the grid demand is still met using traditional carbon-intensive
energy sources. In this paper, we study the problem of using energy storage
deployed in the grid to reduce the grid's carbon emissions. While energy
storage has previously been used for grid optimizations such as peak shaving
and smoothing intermittent sources, our insight is to use distributed storage
to enable utilities to reduce their reliance on their less efficient and most
carbon-intensive power plants and thereby reduce their overall emission
footprint. We formulate the problem of emission-aware scheduling of distributed
energy storage as an optimization problem, and use a robust optimization
approach that is well-suited for handling the uncertainty in load predictions,
especially in the presence of intermittent renewables such as solar and wind.
We evaluate our approach using a state-of-the-art neural network load
forecasting technique and real load traces from a distribution grid with 1,341
homes. Our results show a reduction of >0.5 million kg in annual carbon
emissions -- equivalent to a drop of 23.3% in our electric grid emissions.
Comment: 11 pages, 7 figures. This paper will appear in the Proceedings of the ACM International Conference on Future Energy Systems (e-Energy 20), June 2020, Australia
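The core idea, shifting energy from high-carbon-intensity hours into storage charged during low-intensity hours subject to battery limits, can be sketched with a simple greedy heuristic. This is a toy illustration, not the paper's robust-optimization formulation; the intensity profile, battery parameters, and round-trip efficiency are all invented:

```python
def schedule_storage(intensity, capacity, power, efficiency=0.9):
    """Greedy emission-aware schedule for one battery over a horizon.

    Pairs the cleanest hours (charge) with the dirtiest hours (discharge)
    while charging still saves emissions net of round-trip losses.
    Returns per-hour net battery flow (positive = charge from the grid).
    Toy heuristic only, not the paper's robust optimization.
    """
    hours = sorted(range(len(intensity)), key=lambda t: intensity[t])
    flow = [0.0] * len(intensity)
    lo, hi = 0, len(hours) - 1
    used = 0.0
    while lo < hi and used < capacity:
        c, d = hours[lo], hours[hi]
        # Delivering 1 kWh at hour d costs 1/efficiency kWh at hour c:
        # only worthwhile if it reduces total emissions.
        if intensity[c] / efficiency >= intensity[d]:
            break
        amount = min(power, capacity - used)
        flow[c] += amount / efficiency
        flow[d] -= amount
        used += amount
        lo += 1
        hi -= 1
    return flow

# Hourly carbon intensity (kg CO2/kWh): e.g. overnight wind vs evening peak.
intensity = [0.2, 0.25, 0.3, 0.6, 0.8, 0.7, 0.35, 0.22]
demand = [1.0] * 8
flow = schedule_storage(intensity, capacity=2.0, power=1.0)

emissions_base = sum(i * d for i, d in zip(intensity, demand))
emissions_opt = sum(i * (d + f) for i, d, f in zip(intensity, demand, flow))
```

Even this crude heuristic lowers total emissions on the toy profile; the paper's contribution is to pose the same trade-off as a robust optimization that also handles load-forecast uncertainty.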
First results from 2+1-Flavor Domain Wall QCD: Mass Spectrum, Topology Change and Chiral Symmetry with
We present results for the static interquark potential, light meson and
baryon masses, and light pseudoscalar meson decay constants obtained from
simulations of domain wall QCD with one dynamical flavour approximating the
quark, and two degenerate dynamical flavours with input bare masses ranging
from to approximating the and quarks. We compare these
quantities obtained using the Iwasaki and DBW2 improved gauge actions, and
actions with larger rectangle coefficients, on lattices. We seek
parameter values at which both the chiral symmetry breaking residual mass due
to the finite lattice extent in the fifth dimension and the Monte Carlo time
history for topological charge are acceptable for this set of quark masses at
lattice spacings above 0.1 fm. We find that the Iwasaki gauge action is best,
demonstrating the feasibility of using QCDOC to generate ensembles which are
good representations of the QCD path integral on lattices of up to 3 fm in
spatial extent with lattice spacings in the range 0.09-0.13 fm. Despite large
residual masses and a limited number of sea quark mass values with which to
perform chiral extrapolations, our results for light hadronic physics scale and
agree with experimental measurements within our statistical uncertainties.
Comment: RBC and UKQCD Collaborations. 82 pages, 34 figures. Typos corrected
An Adaptive Interacting Wang-Landau Algorithm for Automatic Density Exploration
While statisticians are well-accustomed to performing exploratory analysis in
the modeling stage of an analysis, conducting preliminary general-purpose
exploratory analysis in the Monte Carlo stage (or, more generally, the
model-fitting stage) is a practice we feel deserves much further attention.
Towards this aim, this paper proposes a
general-purpose algorithm for automatic density exploration. The proposed
exploration algorithm combines and expands upon components from various
adaptive Markov chain Monte Carlo methods, with the Wang-Landau algorithm at
its heart. Additionally, the algorithm is run on interacting parallel chains --
a feature that both decreases computational cost and stabilizes the
algorithm, improving its ability to explore the density. Performance is studied
in several applications. Through a Bayesian variable selection example, the
authors demonstrate the convergence gains obtained with interacting chains. The
ability of the algorithm's adaptive proposal to induce mode-jumping is
illustrated through a trimodal density and a Bayesian mixture modeling
application. Lastly, through a 2D Ising model, the authors demonstrate the
ability of the algorithm to overcome the high correlations encountered in
spatial models.
Comment: 33 pages, 20 figures (the supplementary materials are included as appendices)
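The Wang-Landau core of the algorithm, a random walk in energy whose acceptance probability is inversely proportional to the running density-of-states estimate, with flat-histogram refinement of the modification factor, can be shown on a toy system with a known answer: the "energy" is the number of up-spins among N independent spins, whose true density of states is C(N, k). This single-walker toy is not the paper's adaptive interacting parallel-chain scheme:

```python
import math
import random

def wang_landau_coin(n_spins=10, flatness=0.8, ln_f_final=1e-4, seed=4):
    """Wang-Landau estimate of ln g(k), the log density of states of the
    number of up-spins k among n_spins spins (true g(k) = C(n_spins, k)).

    Accept a single spin flip with probability min(1, g(k_old)/g(k_new)),
    update ln g and the visit histogram, and halve ln f whenever the
    histogram is sufficiently flat.
    """
    rng = random.Random(seed)
    spins = [rng.randint(0, 1) for _ in range(n_spins)]
    k = sum(spins)
    ln_g = [0.0] * (n_spins + 1)
    hist = [0] * (n_spins + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        i = rng.randrange(n_spins)
        k_new = k + (1 if spins[i] == 0 else -1)
        delta = ln_g[k] - ln_g[k_new]
        if delta >= 0 or rng.random() < math.exp(delta):
            spins[i] ^= 1
            k = k_new
        ln_g[k] += ln_f
        hist[k] += 1
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (n_spins + 1)
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau_coin()
true_ln_g = [math.log(math.comb(10, k)) for k in range(11)]
# ln g is only defined up to an additive constant; align by the mean.
shift = (sum(ln_g) - sum(true_ln_g)) / len(ln_g)
errors = [abs(g - shift - t) for g, t in zip(ln_g, true_ln_g)]
```

The estimated log density of states tracks the binomial coefficients closely; the paper builds on this machinery with adaptive proposals and interacting chains to explore much harder densities.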